39 research outputs found

    Prospective observational study of vancomycin injection in SLED patient of ethnic Indians

    Because vancomycin is itself a nephrotoxic antibiotic, it is only sometimes recommended for slow low-efficiency dialysis (SLED) patients with highly resistant infections, and its dose must be strictly monitored after intravenous injection. Blood collected from 11 patients was analyzed for vancomycin concentration by HPLC, and the half-life was evaluated for therapeutic drug monitoring. The evaluated vancomycin T1/2 was 39.12 ± 6.81 hours, the mean systemic clearance was 16.91 ± 6.99 mL/min, and the mean Vd was 0.57 ± 0.147 L/kg. In comparison, a reported study found mean ± SD half-life, volume of distribution, and systemic clearance of 43.1 ± 21.6 hours, 0.84 ± 0.17 L/kg, and 24.3 ± 8.39 mL/min, respectively. Comparing the half-lives, the t statistic was 0.5828 with 20 degrees of freedom (df) and a standard error of difference of 6.829, giving a two-tailed P value of 0.5665, i.e. P > 0.05. Thus, in ethnic Indian SLED patients the T1/2 (mean ± SD 39.12 ± 6.81 hours) was shorter than in Caucasian patients (43.1 ± 21.6 hours), but this difference was not statistically significant (t = 0.5828, P = 0.5665). The half-life was below 40 hours in 8 of the 11 ethnic Indian patients. Keywords: Vancomycin assay; Slow low-efficiency dialysis; Pharmacokinetic analysis; Ethnic Indian
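    The comparison above is a pooled two-sample t-test computed from summary statistics (mean ± SD of each group). The following is a minimal sketch that reproduces the reported numbers, assuming the Caucasian reference cohort also comprised 11 patients (inferred from df = 20); the group sizes and the use of scipy are illustrative assumptions, not details stated in the abstract.

        from math import sqrt
        from scipy import stats

        # Ethnic Indian SLED cohort (this study): half-life mean, SD (hours), sample size
        m1, s1, n1 = 39.12, 6.81, 11
        # Caucasian reference cohort (reported study); n2 = 11 is assumed from df = 20
        m2, s2, n2 = 43.1, 21.6, 11

        # Pooled two-sample t-test computed from summary statistics
        t, p = stats.ttest_ind_from_stats(m1, s1, n1, m2, s2, n2, equal_var=True)

        # Standard error of the difference between means (pooled variance)
        sp = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
        se_diff = sp * sqrt(1 / n1 + 1 / n2)

        print(f"|t| = {abs(t):.4f}, df = {n1 + n2 - 2}, SE = {se_diff:.3f}, two-tailed P = {p:.4f}")
        # Close to the reported values: t = 0.5828, df = 20, SE = 6.829, P = 0.5665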

    (1,j)-set problem in graphs

    A subset D ⊆ V of a graph G = (V, E) is a (1, j)-set if every vertex v ∈ V ∖ D is adjacent to at least 1 but not more than j vertices in D. The cardinality of a minimum (1, j)-set of G, denoted γ(1,j)(G), is called the (1, j)-domination number of G. Given a graph G = (V, E) and an integer k, the decision version of the (1, j)-set problem is to decide whether G has a (1, j)-set of cardinality at most k. In this paper, we first obtain an upper bound on γ(1,j)(G) using probabilistic methods, for graphs of bounded minimum and maximum degree. Our bound is constructive, via the randomized algorithm of Moser and Tardos [MT10]. We also show that the (1, j)-set problem is NP-complete for chordal graphs. Finally, we design two algorithms for finding γ(1,j)(G) of a tree and of a split graph, for any fixed j, which answers an open question posed in [CHHM13].
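    The definition can be checked directly: a set D is a (1, j)-set exactly when every vertex outside D has between 1 and j neighbours in D. Below is a minimal verifier sketch under the assumption that the graph is given as an adjacency-list dictionary; the function name and example graph are illustrative only.

        def is_one_j_set(adj, D, j):
            """Check whether D is a (1, j)-set of the graph given by adjacency lists `adj`.

            Every vertex v outside D must have at least 1 and at most j neighbours in D.
            """
            D = set(D)
            for v, neighbours in adj.items():
                if v in D:
                    continue
                k = sum(1 for u in neighbours if u in D)
                if not (1 <= k <= j):
                    return False
            return True

        # Example: the path a-b-c-d.
        path = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
        print(is_one_j_set(path, {"b", "c"}, 1))  # True: a and d each see exactly one vertex of D
        print(is_one_j_set(path, {"a"}, 1))       # False: c has no neighbour in D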

    Small Language Models Fine-tuned to Coordinate Larger Language Models improve Complex Reasoning

    Large Language Models (LLMs) prompted to generate chain-of-thought (CoT) exhibit impressive reasoning capabilities. Recent attempts at prompt decomposition toward solving complex, multi-step reasoning problems depend on the ability of the LLM to simultaneously decompose and solve the problem. A significant disadvantage is that foundational LLMs are typically not available for fine-tuning, making adaptation computationally prohibitive. We believe (and demonstrate) that problem decomposition and solution generation are distinct capabilities, better addressed by separate modules than by one monolithic LLM. We introduce DaSLaM, which uses a decomposition generator to decompose complex problems into subproblems that require fewer reasoning steps. These subproblems are answered by a solver. We use a relatively small (13B-parameter) LM as the decomposition generator, which we train using policy gradient optimization to interact with a solver LM (regarded as a black box) and guide it through subproblems, thereby rendering our method solver-agnostic. Evaluation on multiple reasoning datasets reveals that with our method, a 175 billion parameter LM (text-davinci-003) can produce performance competitive with, or even better than, its orders-of-magnitude larger successor, GPT-4. Additionally, we show that DaSLaM is not limited by the solver's capabilities as a function of scale; e.g., solver LMs of diverse sizes show significant performance improvements with our solver-agnostic decomposition technique. Exhaustive ablation studies evince the superiority of our modular fine-tuning technique over exorbitantly large decomposer LLMs based on prompting alone. Comment: EMNLP 202
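    The decomposer-solver interaction described above can be pictured as a simple control loop: the solver attempts the problem, the small decomposer LM proposes subproblems conditioned on the question and that attempt, and the solver answers them in sequence before producing a final answer. The sketch below illustrates that loop only; the function names are hypothetical stand-ins, and the policy-gradient training of the decomposer is not shown.

        from typing import Callable, List

        def solve_with_decomposition(
            question: str,
            decomposer: Callable[[str, str], List[str]],
            solver: Callable[[str], str],
        ) -> str:
            """Guide a black-box solver LM through subproblems proposed by a decomposer LM."""
            # 1. Let the solver attempt the full problem once; its answer is fed to the decomposer.
            initial_attempt = solver(question)

            # 2. The decomposer proposes simpler subproblems needing fewer reasoning steps.
            subproblems = decomposer(question, initial_attempt)

            # 3. Guide the solver through the subproblems, accumulating intermediate answers.
            context = ""
            for sub in subproblems:
                answer = solver(f"{context}\nQ: {sub}")
                context += f"\nQ: {sub}\nA: {answer}"

            # 4. Ask the solver for the final answer, conditioned on the solved subproblems.
            return solver(f"{context}\nOriginal question: {question}\nFinal answer:")

        # Toy stand-ins just to exercise the control flow.
        toy_solver = lambda prompt: "42"
        toy_decomposer = lambda q, attempt: ["What quantities are given?", "How do they combine?"]
        print(solve_with_decomposition("A toy multi-step question", toy_decomposer, toy_solver))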

    How did the discussion go: Discourse act classification in social media conversations

    We propose a novel attention-based hierarchical LSTM model to classify discourse act sequences in social media conversations, aimed at mining online discussion data using textual cues beyond the sentence level. The uniqueness of the task lies in the complete categorization of possible pragmatic roles in informal textual discussions, in contrast to role-specific tasks such as question-answer extraction, stance detection, or sarcasm identification. An earlier attempt was made on a Reddit discussion dataset. We train our model on the same data, and present test results on two different datasets, one from Reddit and one from Facebook. Our proposed model outperformed the previous one in terms of domain independence: without using platform-dependent structural features, our hierarchical LSTM with a word relevance attention mechanism achieved F1-scores of 71% and 66% in predicting the discourse roles of comments in Reddit and Facebook discussions, respectively. The efficiency of recurrent and convolutional architectures in learning discursive representations on the same task is also presented and analyzed, with different word and comment embedding schemes. Our attention mechanism enables us to examine the relevance ordering of text segments according to their roles in discourse. We present a human annotator experiment to unveil important observations about modeling and data annotation. Equipped with our text-based discourse identification model, we examine how heterogeneous non-textual features such as location, time, and leaning of information play their roles in characterizing online discussions on Facebook.
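    A hierarchical model of this kind can be sketched as a word-level BiLSTM with an attention pooling step that yields one vector per comment, followed by a comment-level LSTM that labels each comment with a discourse act. The sketch below is illustrative only: the class name, layer sizes, and number of act classes are assumptions, not the paper's exact architecture.

        import torch
        import torch.nn as nn

        class HierarchicalDiscourseLSTM(nn.Module):
            """Word-level BiLSTM with attention pools each comment into a vector;
            a comment-level LSTM then labels each comment with a discourse act."""

            def __init__(self, vocab_size, emb_dim=100, word_hid=64, comm_hid=64, n_acts=9):
                super().__init__()
                self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
                self.word_lstm = nn.LSTM(emb_dim, word_hid, batch_first=True, bidirectional=True)
                self.attn = nn.Linear(2 * word_hid, 1)                 # word relevance scores
                self.comment_lstm = nn.LSTM(2 * word_hid, comm_hid, batch_first=True)
                self.classifier = nn.Linear(comm_hid, n_acts)

            def forward(self, tokens):
                # tokens: (n_comments, n_words) word ids for one conversation thread
                h, _ = self.word_lstm(self.embed(tokens))              # (n_comments, n_words, 2*word_hid)
                alpha = torch.softmax(self.attn(h), dim=1)             # attention weights over words
                comment_vecs = (alpha * h).sum(dim=1)                  # (n_comments, 2*word_hid)
                seq, _ = self.comment_lstm(comment_vecs.unsqueeze(0))  # thread as a comment sequence
                return self.classifier(seq.squeeze(0))                 # (n_comments, n_acts) logits

        # Toy thread: 3 comments of 5 word ids each, 9 candidate discourse acts.
        model = HierarchicalDiscourseLSTM(vocab_size=1000)
        logits = model(torch.randint(1, 1000, (3, 5)))
        print(logits.shape)   # torch.Size([3, 9])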